Anti-Robot Speciesism

De Freitas, Julian, Castelo, Noah, Schmitt, Bernd, Sarvary, Miklos

arXiv.org Artificial Intelligence

Date submitted: March 2025. Words: 9,220.

Abstract: Humanoid robots are a form of embodied artificial intelligence (AI) that looks and acts more and more like humans. Powered by generative AI and advances in robotics, humanoid robots can speak and interact with humans rather naturally, but are still easily recognizable as robots. How will we treat humanoids when they seem indistinguishable from humans in appearance and mind? We find a tendency (called "anti-robot" speciesism) to deny such robots humanlike capabilities, driven by motivations to accord members of the human species preferential treatment. Six experiments show that robots are denied humanlike attributes simply because they are not biological beings and because humans want to avoid feelings of cognitive dissonance when utilizing such robots for unsavory tasks. Thus, people do not rationally attribute capabilities to perfectly humanlike robots but deny them capabilities as it suits them.

Keywords: robots, artificial intelligence, humanoids, speciesism, cognitive dissonance

In recent years, new artificial intelligence (AI) technologies have been introduced into the marketplace that have the potential to radically change people's work and lives. This paper examines how people might react to robots that seem to be "perfectly humanlike". With major companies like Amazon and Nvidia planning mass production of such robots, we are entering an era where the line between human and non-human entities is increasingly blurred. Our findings suggest that the advent of such robots will not lead people to rationally conclude that these robots are as capable as humans at performing some tasks. Rather, people will deny these robots humanlike attributes, driven by their motivation to prioritize their own species and to avoid the cognitive dissonance that comes from utilizing such robots for unsavory tasks.

Aversion to Robots and AI

People are often averse to robots.
Psychological research has explained this effect by arguing that such "almost humanlike" robots appear aesthetically displeasing, and that they remind people of zombies, death, or disease (Kätsyri et al., 2015; Mori, 1970; Wang et al., 2015). Other psychological explanations focus on how people perceive robot minds, sometimes referred to as the "uncanny valley of mind" (Müller et al., 2021; Stein & Ohler, 2017). These theories suggest that humanoid robots can be unsettling because they remind people of the human ability to experience feelings, even though these robots are not seen as having such capabilities (Gray & Wegner, 2012; Smith et al., 2021).


What Do People Think about Sentient AI?

Anthis, Jacy Reese, Pauketat, Janet V. T., Ladak, Ali, Manoli, Aikaterina

arXiv.org Artificial Intelligence

With rapid advances in machine learning, many people in the field have been discussing the rise of digital minds and the possibility of artificial sentience. Future developments in AI capabilities and safety will depend on public opinion and human-AI interaction. To begin to fill this research gap, we present the first nationally representative survey data on the topic of sentient AI: initial results from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, a preregistered and longitudinal study of U.S. public opinion that began in 2021. Across one wave of data collection in 2021 and two in 2023 (total N = 3,500), we found mind perception and moral concern for AI well-being in 2021 were higher than predicted and significantly increased in 2023: for example, 71% agree sentient AI deserve to be treated with respect, and 38% support legal rights. People have become more threatened by AI, and there is widespread opposition to new technologies: 63% support a ban on smarter-than-human AI, and 69% support a ban on sentient AI. Expected timelines are surprisingly short and shortening with a median forecast of sentient AI in only five years and artificial general intelligence in only two years. We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction and shape the future trajectory of AI technologies, including existential risks and opportunities.


The Moral Foundations Reddit Corpus

Trager, Jackson, Ziabari, Alireza S., Davani, Aida Mostafazadeh, Golazizian, Preni, Karimi-Malekabadi, Farzan, Omrani, Ali, Li, Zhihe, Kennedy, Brendan, Reimer, Nils Karl, Reyes, Melissa, Cheng, Kelsey, Wei, Mellow, Merrifield, Christina, Khosravi, Arta, Alvarez, Evans, Dehghani, Morteza

arXiv.org Artificial Intelligence

Moral framing and sentiment can affect a variety of online and offline behaviors, including donation, pro-environmental action, political engagement, and even participation in violent protests. Various computational methods in Natural Language Processing (NLP) have been used to detect moral sentiment from textual data, but in order to achieve better performances in such subjective tasks, large sets of hand-annotated training data are needed. Previous corpora annotated for moral sentiment have proven valuable, and have generated new insights both within NLP and across the social sciences, but have been limited to Twitter. To facilitate improving our understanding of the role of moral rhetoric, we present the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments that have been curated from 12 distinct subreddits, hand-annotated by at least three trained annotators for 8 categories of moral sentiment (i.e., Care, Proportionality, Equality, Purity, Authority, Loyalty, Thin Morality, Implicit/Explicit Morality) based on the updated Moral Foundations Theory (MFT) framework. We use a range of methodologies to provide baseline moral-sentiment classification results for this new corpus, e.g., cross-domain classification and knowledge transfer.